Improved Abdominal Multi-Organ Segmentation via 3D Boundary-Constrained Deep Neural Networks
Quantitative assessment of the abdominal region from clinically acquired CT
scans requires the simultaneous segmentation of abdominal organs. Thanks to the
availability of high-performance computational resources, deep learning-based
methods have resulted in state-of-the-art performance for the segmentation of
3D abdominal CT scans. However, the complex characterization of organs with
fuzzy boundaries prevents deep learning methods from accurately segmenting
these anatomical structures. Specifically, voxels on the boundary of organs are
more vulnerable to misprediction due to the highly varying intensity of
inter-organ boundaries. This paper investigates the possibility of improving
the abdominal image segmentation performance of the existing 3D encoder-decoder
networks by leveraging organ-boundary prediction as a complementary task. To
address the problem of abdominal multi-organ segmentation, we train the 3D
encoder-decoder network to simultaneously segment the abdominal organs and
their corresponding boundaries in CT scans via multi-task learning. The network
is trained end-to-end using a loss function that combines two task-specific
losses, i.e., complete organ segmentation loss and boundary prediction loss. We
explore two different network topologies based on the extent of weights shared
between the two tasks within a unified multi-task framework. To evaluate the
utilization of the complementary boundary prediction task in improving the
abdominal multi-organ segmentation, we use three state-of-the-art
encoder-decoder networks: 3D UNet, 3D UNet++, and 3D Attention-UNet. The
effectiveness of utilizing the organs' boundary information for abdominal
multi-organ segmentation is evaluated on two publicly available abdominal CT
datasets. A maximum relative improvement of 3.5% and 3.6% is observed in Mean
Dice Score for Pancreas-CT and BTCV datasets, respectively.

Comment: 15 pages, 16 figures, journal paper
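The multi-task objective described above combines an organ segmentation loss with a boundary prediction loss. A minimal sketch of such a combined loss is given below; the use of a soft Dice loss for both terms and the weighting factor `lam` are assumptions for illustration, not details stated in the abstract.

```python
# Hedged sketch of a two-term multi-task loss:
# total = segmentation loss + lam * boundary loss.
# Dice loss for both terms and the value of `lam` are assumptions.

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flattened probability maps (lists of floats)."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def multitask_loss(seg_pred, seg_target, bnd_pred, bnd_target, lam=0.5):
    """Combined objective: organ segmentation loss + weighted boundary loss."""
    return dice_loss(seg_pred, seg_target) + lam * dice_loss(bnd_pred, bnd_target)
```

A perfect prediction on both tasks drives the combined loss toward zero, while boundary errors are penalized in proportion to `lam`.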
Improved Abdominal Multi-Organ Segmentation via 3D Boundary-Constrained Deep Neural Networks
Quantitative assessment of the abdominal region from CT scans requires the accurate delineation of abdominal organs. Therefore, automatic abdominal image segmentation has been the subject of intensive research for the past two decades. Recently, deep learning-based methods have resulted in state-of-the-art performance for 3D abdominal CT segmentation. However, the complex characterization of abdominal organs with weak boundaries prevents deep learning methods from segmenting them accurately. Specifically, voxels on the boundary of organs are more vulnerable to misprediction due to their highly varying intensities. This paper proposes a method for improved abdominal image segmentation by leveraging organ-boundary prediction as a complementary task. We train 3D encoder-decoder networks to simultaneously segment the abdominal organs and their boundaries via multi-task learning. We explore two network topologies based on the extent of weights shared between the two tasks within a unified multi-task framework. In the first topology, the whole-organ prediction task and the boundary detection task share all the layers in the network except for the last task-specific layers. The second topology employs a single shared encoder but two separate task-specific decoders. The effectiveness of utilizing the organs' boundary information for abdominal multi-organ segmentation is evaluated on two publicly available abdominal CT datasets: Pancreas-CT and the BTCV dataset. The improvements shown in the segmentation results reveal the advantage of multi-task training, which forces the network to pay attention to ambiguous organ boundaries. A maximum relative improvement of 3.5% and 3.6% is observed in Mean Dice Score for the Pancreas-CT and BTCV datasets, respectively.
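The second topology described above, a single shared encoder feeding two task-specific decoders, can be sketched structurally as follows. The function names and the toy placeholder computations are illustrative assumptions; the paper's actual 3D convolutional layers are not specified in the abstract.

```python
# Hedged structural sketch of the shared-encoder / two-decoder topology.
# All computations below are trivial placeholders standing in for 3D
# convolutional stages; only the wiring reflects the described design.

def shared_encoder(volume):
    """Stand-in for the shared 3D encoder: maps an input volume to features."""
    return [v * 2 for v in volume]  # placeholder transformation

def organ_decoder(features):
    """Task head 1: whole-organ segmentation output."""
    return [f + 1 for f in features]  # placeholder

def boundary_decoder(features):
    """Task head 2: organ-boundary prediction output."""
    return [f - 1 for f in features]  # placeholder

def forward(volume):
    feats = shared_encoder(volume)  # computed once, shared by both tasks
    return organ_decoder(feats), boundary_decoder(feats)
```

The design point is that encoder weights (and their gradients) are shared across both tasks, so boundary supervision regularizes the features used for whole-organ segmentation.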